


Neural Information Processing Systems

We are very grateful to the reviewers for their insightful, thorough, and thoughtful reviews. We will also include additional low-level details in the paper. The models are indeed inferior in terms of latency on current hardware. The SOTA in [1] is 73.1% for 10% and 74.9% for 20%. R2 (resolution & stride): as we discuss briefly in Section 2.4, for the large-scale experiments we consider a separate … The number of output nodes is the number of channels that MobileNetV1 has at that resolution. Thank you for the relevant papers, which we will include.
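The mapping from resolution to output nodes mentioned above can be illustrated with a small sketch. The channel counts below follow the standard MobileNetV1 architecture (width multiplier 1.0, 224x224 input); treat this as an assumption-laden illustration, not the authors' exact search-space definition.

```python
# Channels MobileNetV1 produces at each feature-map resolution
# (standard architecture table, width multiplier 1.0, 224x224 input).
# Illustrative assumption, not the paper's exact configuration.
MOBILENET_V1_CHANNELS = {
    112: 64,   # after the first depthwise-separable block
    56: 128,
    28: 256,
    14: 512,
    7: 1024,
}

def output_nodes(resolution):
    """Number of output nodes (channels) used at a given resolution."""
    return MOBILENET_V1_CHANNELS[resolution]

print(output_nodes(14))  # → 512
```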


We thank all four reviewers for their thoughtful reviews, and are happy that they value the contribution of a new …

Neural Information Processing Systems

…'s suggestion to draw the external nodes on the left-hand side differs from … But this wouldn't be as flexible as we'd like; for example, we'd like to query an HMM for the … But conjunction is an operation on FGGs, not factor graphs, so at the time of conjunction, no renaming has taken place. We agree that the notation should be improved and will think about how to do so. Lemma 15 does not change the generated graphs and so cannot change their treewidth. We mean that an FGG can't generate the …
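The kind of HMM query discussed above ultimately reduces to sum-product message passing on the HMM's chain-shaped factor graph. The following is a minimal sketch of that computation (the forward algorithm), not the paper's FGG machinery; all parameters are illustrative assumptions.

```python
# Minimal sketch: querying a toy 2-state HMM by sum-product message
# passing along its chain-shaped factor graph (the forward algorithm).
# The transition, emission, and initial distributions are assumptions.
import numpy as np

T = np.array([[0.7, 0.3],      # T[i, j] = P(next state j | state i)
              [0.4, 0.6]])
E = np.array([[0.9, 0.1],      # E[i, o] = P(observation o | state i)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution

def sequence_prob(obs):
    """P(observation sequence) via forward messages along the chain."""
    alpha = pi * E[:, obs[0]]            # message at t = 0
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]    # pass message to the next node
    return alpha.sum()

# Sanity check: probabilities of all length-2 sequences sum to 1.
total = sum(sequence_prob([a, b]) for a in (0, 1) for b in (0, 1))
print(round(total, 6))  # → 1.0
```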



